
    Real-Time Automatic Object Classification and Tracking using Genetic Programming and NVIDIA® CUDA™

    Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision. In particular, object classification and tracking using a parallel GP system is discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches for visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better. Using these results, a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments, whose goal was to evolve a unique classifier for each texture pattern that existed in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video, and its performance was on par with real-time requirements.
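
    As a rough illustration of the pixel-classifier idea, here is a minimal sketch in which a GP-style expression tree reads intensities from a flattened 3x3 pixel neighbourhood and the sign of its output decides whether the centre pixel belongs to the target texture. The function set, the example tree, and the threshold are invented for illustration and are not taken from the thesis.

        import random

        def evaluate(tree, patch):
            """Recursively evaluate a GP expression tree over a flattened pixel patch.
            Internal nodes are (operator, left, right); leaves are pixel indices."""
            if isinstance(tree, tuple):
                op, left, right = tree
                a, b = evaluate(left, patch), evaluate(right, patch)
                if op == "+": return a + b
                if op == "-": return a - b
                if op == "*": return a * b
                return a / b if abs(b) > 1e-6 else a      # protected division
            return patch[tree]                            # leaf: index into the patch

        # hypothetical classifier that an evolutionary run might have produced
        classifier = ("-", ("*", 4, 4), ("+", 0, 8))      # centre^2 minus a corner sum

        patch = [random.randint(0, 255) for _ in range(9)]   # flattened 3x3 neighbourhood
        is_target = evaluate(classifier, patch) > 0          # sign test = class decision
        print(patch, "-> target texture" if is_target else "-> background")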

    Deep Recurrent Networks for Gesture Recognition and Synthesis

    It is hard to overstate the importance of gesture-based interfaces in many applications nowadays. The adoption of such interfaces stems from the opportunities they create for incorporating natural and fluid user interactions. This highlights the importance of having gesture recognizers that are not only accurate but also easy to adopt. The ever-growing popularity of machine learning has prompted many application developers to integrate automatic methods of recognition into their products. On the one hand, deep learning often tops the list of the most powerful and robust recognizers. These methods have been consistently shown to outperform all other machine learning methods in a variety of tasks. On the other hand, deep networks can be overwhelming to use for a majority of developers, requiring a lot of tuning and tweaking to work as expected. Additionally, these networks are infamous for their requirement for large amounts of training data, further hampering their adoption in scenarios where labeled data is limited. In this dissertation, we aim to bridge the gap between the power of deep learning methods and their adoption into gesture recognition workflows. To this end, we introduce two deep network models for recognition. These models are similar in spirit, but target different application domains: one is designed for segmented gesture recognition, while the other is suitable for continuous data, tackling segmentation and recognition problems simultaneously. The distinguishing characteristics of these networks are their simplicity, small number of free parameters, and their use of common building blocks that come standard with any modern deep learning framework, making them easy to implement, train, and adopt. Through evaluations, we show that our proposed models achieve state-of-the-art results in various recognition tasks and application domains spanning different input devices and interaction modalities. We demonstrate that the infamy of deep networks for demanding powerful hardware as well as large amounts of data is an unfair assessment. On the contrary, we show that in the absence of such data, our proposed models can be quickly trained while achieving competitive recognition accuracy. Next, we explore the problem of synthetic gesture generation: a measure often taken to address the shortage of labeled data. We extend our proposed recognition models and demonstrate that the same models can be used in a Generative Adversarial Network (GAN) architecture for synthetic gesture generation. Specifically, we show that our original recognizer can be used as the discriminator in such frameworks, while a slightly modified version can act as the gesture generator. We then formulate a novel loss function for our gesture generator, which entirely replaces the need for a discriminator network in our generative model, thereby significantly reducing the complexity of our framework. Through evaluations, we show that our model is able to improve the recognition accuracy of multiple recognizers across a variety of datasets. Through user studies, we additionally show that human evaluators frequently mistake our synthetic samples for real ones, indicating that our synthetic samples are visually realistic. Additional resources for this dissertation (such as demo videos and public source code) are available at https://www.maghoumi.com/dissertatio
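
    As a loose illustration of the segmented-recognition setting, the sketch below builds a small recurrent classifier from standard building blocks (a GRU followed by a linear layer), which is the general family the dissertation works in. The layer sizes, feature count, and class count are placeholders; this is not the proposed architecture.

        import torch
        import torch.nn as nn

        class GestureGRU(nn.Module):
            """Minimal recurrent recognizer: encode the whole gesture sequence with a
            GRU and classify it from the final hidden state."""
            def __init__(self, n_features, n_classes, hidden=64):
                super().__init__()
                self.rnn = nn.GRU(n_features, hidden, batch_first=True)
                self.fc = nn.Linear(hidden, n_classes)

            def forward(self, x):                  # x: (batch, frames, features)
                _, h = self.rnn(x)                 # h: (1, batch, hidden)
                return self.fc(h[-1])              # one logit vector per sequence

        model = GestureGRU(n_features=6, n_classes=10)   # placeholder sizes
        batch = torch.randn(4, 120, 6)                   # 4 gestures, 120 frames each
        print(model(batch).shape)                        # torch.Size([4, 10])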

    Code Park: A New 3D Code Visualization Tool

    We introduce Code Park, a novel tool for visualizing codebases in a 3D game-like environment. Code Park aims to improve a programmer's understanding of an existing codebase in a manner that is both engaging and intuitive, appealing to novice users such as students. It achieves these goals by laying out the codebase in a 3D park-like environment. Each class in the codebase is represented as a 3D room-like structure. Constituent parts of the class (variables, member functions, etc.) are laid out on the walls, resembling a syntax-aware "wallpaper". The users can interact with the codebase using an overview mode and a first-person viewer mode. We conducted two user studies to evaluate Code Park's usability and suitability for organizing an existing project. Our results indicate that Code Park is easy to get familiar with and significantly helps in code understanding compared to a traditional IDE. Further, the users unanimously believed that Code Park was a fun tool to work with.
    Accepted for publication in the 2017 IEEE Working Conference on Software Visualization (VISSOFT 2017); supplementary video: https://www.youtube.com/watch?v=LUiy1M9hUK
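
    Purely to illustrate the class-to-room mapping described above, the toy sketch below assigns the members of a class to the four walls of a room record. The data layout and names are invented and do not reflect Code Park's actual implementation.

        from dataclasses import dataclass, field

        @dataclass
        class Room:
            """One 3D room per class; each wall carries a list of member snippets."""
            class_name: str
            walls: dict = field(default_factory=lambda: {"north": [], "east": [],
                                                         "south": [], "west": []})

        def layout_class(class_name, members):
            """Distribute a class's members round-robin across the four walls."""
            room = Room(class_name)
            sides = list(room.walls)
            for i, member in enumerate(members):
                room.walls[sides[i % 4]].append(member)
            return room

        print(layout_class("Player", ["health", "score", "move()", "jump()", "render()"]))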

    NVAutoNet: Fast and Accurate 360° 3D Visual Perception For Self Driving

    Robust real-time perception of the 3D world is essential to the autonomous vehicle. We introduce an end-to-end surround-camera perception system for self-driving. Our perception system is a novel multi-task, multi-camera network which takes a variable set of time-synced camera images as input and produces a rich collection of 3D signals such as sizes, orientations, and locations of obstacles, parking spaces, and free-spaces. Our perception network is modular and end-to-end: 1) the outputs can be consumed directly by downstream modules without any post-processing such as clustering and fusion, improving the speed of model deployment and in-car testing; 2) the whole network is trained in one single stage, improving the speed of model improvement and iteration. The network is designed to achieve high accuracy while running at 53 fps on the NVIDIA Orin SoC (system-on-a-chip). The network is robust to sensor mounting variations (within some tolerances) and can be quickly customized for different vehicle types via efficient model fine-tuning, thanks to its capability of taking calibration parameters as additional inputs during training and testing. Most importantly, our network has been successfully deployed and is being tested on real roads.
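
    The sketch below is only a schematic of the multi-task idea described above: a shared image encoder feeds separate output heads, with camera calibration appended as a conditioning vector. The layer choices, head outputs, and calibration encoding are placeholders and bear no relation to NVAutoNet's actual architecture.

        import torch
        import torch.nn as nn

        class MultiTaskPerception(nn.Module):
            """Shared image encoder feeding several 3D-output heads; camera
            calibration is concatenated to the features as extra conditioning."""
            def __init__(self, calib_dim=9):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.obstacle_head = nn.Linear(32 + calib_dim, 7)    # x, y, z, w, l, h, yaw
                self.freespace_head = nn.Linear(32 + calib_dim, 36)  # radial free-space distances

            def forward(self, images, calib):
                feats = torch.cat([self.encoder(images), calib], dim=1)
                return self.obstacle_head(feats), self.freespace_head(feats)

        net = MultiTaskPerception()
        imgs = torch.randn(2, 3, 128, 256)     # two synthetic camera frames
        calib = torch.randn(2, 9)              # flattened intrinsics per camera
        obstacles, freespace = net(imgs, calib)
        print(obstacles.shape, freespace.shape)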

    Gemsketch: Interactive Image-Guided Geometry Extraction From Point Clouds

    We introduce an interactive system for extracting the geometries of generalized cylinders and cuboids from single- or multiple-view point clouds. Our proposed method is intuitive and only requires the object's silhouettes to be traced by the user. Leveraging the user's perceptual understanding of what an object looks like, our proposed method is capable of extracting accurate models, even in the presence of occlusion, clutter, or incomplete point cloud data, while preserving the original object's details and scale. We demonstrate the merits of our proposed method through a set of experiments on a public RGB-D dataset. We extracted 16 objects from the dataset using at most two views of each object. Our extracted models exhibit a high degree of visual similarity to the original objects. Further, we achieved a mean normalized Hausdorff distance of 5.66% when comparing our extracted models with the dataset's ground truths.
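
    For reference, the sketch below computes a symmetric Hausdorff distance between two point sets and normalizes it by the ground truth's bounding-box diagonal so it can be reported as a percentage. The specific normalization used in the paper may differ, so treat this as an assumption.

        import numpy as np

        def hausdorff(a, b):
            """Symmetric Hausdorff distance between point sets a (N,3) and b (M,3)."""
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        def normalized_hausdorff(model, truth):
            """Hausdorff distance divided by the ground truth's bounding-box diagonal."""
            diag = np.linalg.norm(truth.max(axis=0) - truth.min(axis=0))
            return hausdorff(model, truth) / diag

        model = np.random.rand(500, 3)     # stand-in for an extracted model's points
        truth = np.random.rand(800, 3)     # stand-in for the ground-truth scan
        print(f"{100 * normalized_hausdorff(model, truth):.2f}%")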

    A Rapid Prototyping Approach To Synthetic Data Generation For Improved 2D Gesture Recognition

    Training gesture recognizers with synthetic data generated from real gestures is a well-known and powerful technique that can significantly improve recognition accuracy. In this paper we introduce a novel technique called gesture path stochastic resampling (GPSR) that is computationally efficient, has minimal coding overhead, and yet, despite its simplicity, is able to achieve higher accuracy than competitive, state-of-the-art approaches. GPSR generates synthetic samples by lengthening and shortening gesture subpaths within a given sample to produce realistic variations of the input via a process of nonuniform resampling. As such, GPSR is an appropriate rapid prototyping technique where ease of use, understandability, and efficiency are key. Further, through an extensive evaluation, we show that accuracy significantly improves when gesture recognizers are trained with GPSR synthetic samples. In some cases, mean recognition errors are reduced by more than 70%, and in most cases, GPSR outperforms two other evaluated state-of-the-art methods.
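
    Below is a minimal sketch of the nonuniform-resampling idea: a 2-D gesture path is resampled at points whose spacing along the path is drawn at random, so some subpaths are stretched and others shortened. The interval distribution, point count, and variance are placeholder choices, not the exact procedure from the paper.

        import math, random

        def path_length(pts):
            return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

        def stochastic_resample(pts, n=32, variance=0.25):
            """Resample a 2-D gesture path at n points whose spacing along the path
            is random rather than uniform; each call yields one synthetic variant."""
            intervals = [random.uniform(1 - variance, 1 + variance) for _ in range(n - 1)]
            scale = path_length(pts) / sum(intervals)      # normalize to total arc length
            out, cur, i = [list(pts[0])], list(pts[0]), 0
            for iv in intervals:
                need = iv * scale                          # distance to travel for this sample
                while need > 1e-9 and i < len(pts) - 1:
                    seg = math.dist(cur, pts[i + 1])
                    if seg > need:                         # sample lies inside this segment
                        t = need / seg
                        cur = [cur[0] + t * (pts[i + 1][0] - cur[0]),
                               cur[1] + t * (pts[i + 1][1] - cur[1])]
                        need = 0.0
                    else:                                  # consume the rest of the segment
                        need -= seg
                        cur, i = list(pts[i + 1]), i + 1
                out.append(list(cur))
            return out

        square = [(0, 0), (1, 0), (1, 1), (0, 1)]
        print(stochastic_resample(square, n=8))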

    Jackknife: A Reliable Recognizer With Few Samples And Many Modalities

    Despite decades of research, there is still no general rapid prototyping recognizer for dynamic gestures that can be trained with few samples, work with continuous data, and achieve high accuracy while also being modality-agnostic. To begin to solve this problem, we describe a small suite of accessible techniques that we collectively refer to as the Jackknife gesture recognizer. Our dynamic time warping based approach for both segmented and continuous data is designed to be a robust, go-to method for gesture recognition across a variety of modalities using only limited training samples. We evaluate pen and touch, Wii Remote, Kinect, Leap Motion, and sound-sensed gesture datasets, as well as conduct tests with continuous data. Across all scenarios we show that our approach is able to achieve high accuracy, suggesting that Jackknife is a capable recognizer and a good first choice for many endeavors.
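
    As a baseline illustration of the dynamic time warping core (not the full Jackknife method, which adds further measures such as correction factors and rejection thresholds), the sketch below computes a DTW distance between two trajectories and uses it for 1-nearest-neighbor template matching. The toy templates and labels are invented.

        import math

        def dtw(a, b):
            """Dynamic time warping distance between two gesture trajectories,
            each a list of feature vectors (e.g., 2-D points or joint angles)."""
            n, m = len(a), len(b)
            inf = float("inf")
            cost = [[inf] * (m + 1) for _ in range(n + 1)]
            cost[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = math.dist(a[i - 1], b[j - 1])
                    cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                         cost[i][j - 1],      # deletion
                                         cost[i - 1][j - 1])  # match
            return cost[n][m]

        def classify(query, templates):
            """1-nearest-neighbor: the label of the closest template (by DTW) wins."""
            return min(templates, key=lambda t: dtw(query, t[1]))[0]

        templates = [("circle", [(0, 1), (1, 0), (0, -1), (-1, 0), (0, 1)]),
                     ("line",   [(0, 0), (1, 1), (2, 2), (3, 3)])]
        print(classify([(0, 0), (1, 1), (2, 2)], templates))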

    SHREC 2021: Skeleton-based hand gesture recognition in the wild

    Gesture recognition is a fundamental tool to enable novel interaction paradigms in a variety of application scenarios like Mixed Reality environments, touchless public kiosks, entertainment systems, and more. Recognition of hand gestures can nowadays be performed directly from the stream of hand skeletons estimated by software provided by low-cost trackers (Ultraleap) and MR headsets (Hololens, Oculus Quest) or by video-processing software modules (e.g. Google Mediapipe). Despite the recent advancements in gesture and action recognition from skeletons, it is unclear how well the current state-of-the-art techniques can perform in a real-world scenario for the recognition of a wide set of heterogeneous gestures, as many benchmarks do not test online recognition and use limited dictionaries. This motivated the proposal of the SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild. For this contest, we created a novel dataset with heterogeneous gestures featuring different types and durations. These gestures have to be found inside sequences in an online recognition scenario. This paper presents the results of the contest, showing the performance of the techniques proposed by four research groups on this challenging task, compared with a simple baseline method.
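
    To make the online-recognition setting concrete, here is a generic sliding-window sketch: a window of frames is repeatedly passed to some per-window classifier, and a detection is reported whenever its confidence clears a threshold. The window size, threshold, and the toy classifier are all invented; the contest entries do not necessarily work this way.

        def detect_online(frames, window, classify, threshold=0.8):
            """Slide a fixed-length window over a frame stream and report a gesture
            whenever the window classifier is confident enough; 'classify' is any
            per-window recognizer returning a (label, confidence) pair."""
            detections = []
            for start in range(0, len(frames) - window + 1):
                label, conf = classify(frames[start:start + window])
                if conf >= threshold and label != "non-gesture":
                    detections.append((start, label, conf))
            return detections

        # toy stand-in classifier: flags windows whose total motion exceeds a limit
        def toy_classifier(win):
            motion = sum(abs(win[i] - win[i - 1]) for i in range(1, len(win)))
            return ("swipe", 1.0) if motion > 5 else ("non-gesture", 0.0)

        stream = [0, 0, 1, 3, 6, 9, 9, 9, 8, 8]      # 1-D stand-in for skeleton frames
        print(detect_online(stream, window=4, classify=toy_classifier))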